Rayleigh quotient


UniGAD: Unifying Multi-level Graph Anomaly Detection

Lin, Yiqing, Tang, Jianheng

Neural Information Processing Systems

Graph Anomaly Detection (GAD) aims to identify uncommon, deviated, or suspicious objects within graph-structured data. Existing methods generally focus on a single graph object type (node, edge, graph, etc.) and often overlook the inherent connections among different object types of graph anomalies. For instance, a money laundering transaction might involve an abnormal account and the broader community it interacts with. To address this, we present UniGAD, the first unified framework for detecting anomalies at node, edge, and graph levels jointly. Specifically, we develop the Maximum Rayleigh Quotient Subgraph Sampler (MRQSampler) that unifies multi-level formats by transferring objects at each level into graph-level tasks on subgraphs.
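As background for the sampler's name: the graph Rayleigh quotient of a node signal x with respect to the graph Laplacian L = D − A measures how much of the signal's energy sits at high frequencies. A minimal numpy sketch of that quantity follows; MRQSampler's actual subgraph-selection procedure is not shown and is not claimed here.

```python
import numpy as np

# Hedged sketch: the graph Rayleigh quotient x^T L x / x^T x measures the
# high-frequency energy of a signal x on a graph -- the quantity MRQSampler
# is named after. The sampler's own selection logic is not reproduced here.
def rayleigh_quotient(adj: np.ndarray, x: np.ndarray) -> float:
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                      # combinatorial graph Laplacian L = D - A
    return float(x @ lap @ x) / float(x @ x)

# Path graph on 3 nodes: a constant signal has quotient 0 (no variation),
# while an alternating signal concentrates its energy at high frequencies.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
smooth = rayleigh_quotient(A, np.ones(3))
rough = rayleigh_quotient(A, np.array([1., -1., 1.]))
```

Anomalous substructures tend to produce high-frequency signals, which is why maximizing this quotient over subgraphs is a natural sampling criterion.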


45d74e190008c7bff2845ffc8e3facd3-Supplemental-Conference.pdf

Neural Information Processing Systems

In a typical supervised learning task, one is given a training dataset of n ∈ ℕ labeled samples D = ((xᵢ, yᵢ) ∈ ℝᵈ × ℝ)_{i ∈ [n]}, and a parametric model with m ∈ ℕ parameters, f : ℝᵐ × ℝᵈ → ℝ. The task is to find parameters fitting the training data, i.e. find θ ∈ ℝᵐ such that for all i ∈ [n], f(θ; xᵢ) ≈ yᵢ.
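The fitting task just described can be sketched in a few lines. For illustration only, the model is taken to be linear, f(θ; x) = θ·x, and fitted by plain gradient descent on the mean squared error; the setup in the abstract covers general parametric models.

```python
import numpy as np

# Minimal sketch of the setup above, with an illustrative linear model
# f(theta; x) = theta . x (an assumption made for brevity; the passage
# covers general parametric models) fitted by gradient descent.
rng = np.random.default_rng(0)
n, d = 50, 3
theta_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ theta_true

theta = np.zeros(d)
lr = 0.05
for _ in range(2000):
    grad = X.T @ (X @ theta - y) / n   # gradient of 0.5 * mean squared error
    theta -= lr * grad

# After training, f(theta; x_i) ~ y_i for every sample i.
max_err = np.max(np.abs(X @ theta - y))
```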



Convergence beyond the over-parameterized regime using Rayleigh quotients

Neural Information Processing Systems

In this paper, we present a new strategy to prove the convergence of Deep Learning architectures to a zero training (or even testing) loss by gradient flow. Our analysis is centered on the notion of Rayleigh quotients in order to prove Kurdyka-Łojasiewicz inequalities for a broader set of neural network architectures and loss functions. We show that Rayleigh quotients provide a unified view for several convergence analysis techniques in the literature. Our strategy produces a proof of convergence for various examples of parametric learning. In particular, our analysis does not require the number of parameters to tend to infinity, nor the number of samples to be finite, thus extending to test loss minimization and beyond the over-parameterized regime.
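For reference, the classical Rayleigh quotient on which this analysis is built: for a symmetric matrix $A$ and nonzero $x$,

```latex
R_A(x) = \frac{x^{\top} A x}{x^{\top} x},
\qquad
\lambda_{\min}(A) \;\le\; R_A(x) \;\le\; \lambda_{\max}(A),
```

with equality exactly at the corresponding eigenvectors. The paper's strategy adapts this ratio-bounding idea from quadratic forms to general losses in order to establish Kurdyka-Łojasiewicz-type inequalities.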


A Table of Notation (k: number of latent dimensions in the autoencoder's hidden layer; m: number of dimensions of the input data; n: number of datapoints; W)

Neural Information Processing Systems

Table 1: Summary of notation used in this manuscript, ordered according to its introduction in the main text. This can be justified by the following lemma (Lemma 1), whose proof is a simple application of the chain rule and Taylor's theorem. Thus, we need only compute the second derivative of the regularization terms, and we proceed to take these derivatives.


RAE: A Neural Network Dimensionality Reduction Method for Nearest Neighbors Preservation in Vector Search

Zhang, Han, Zhao, Dongfang

arXiv.org Artificial Intelligence

While high-dimensional embedding vectors are being increasingly employed in various tasks like Retrieval-Augmented Generation and Recommendation Systems, popular dimensionality reduction (DR) methods such as PCA and UMAP have rarely been adopted for accelerating the retrieval process due to their inability to preserve the nearest neighbor (NN) relationship among vectors. Empowered by neural networks' optimization capability and the bounding effect of the Rayleigh quotient, we propose a Regularized Auto-Encoder (RAE) for k-NN preserving dimensionality reduction. RAE constrains the network parameter variation through regularization terms, adjusting singular values to control embedding magnitude changes during reduction, thus preserving k-NN relationships. We provide a rigorous mathematical analysis demonstrating that regularization establishes an upper bound on the norm distortion rate of transformed vectors, thereby offering provable guarantees for k-NN preservation. With modest training overhead, RAE achieves superior k-NN recall compared to existing DR approaches while maintaining fast retrieval efficiency. Vector embeddings have become the cornerstone of modern AI systems, enabling sophisticated semantic understanding across diverse domains including Information Retrieval (Zhu et al. (2023)), Recommendation Systems (Zhao et al. (2024)), and Retrieval-Augmented Generation (RAG) pipelines (Gao et al. (2023)).
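The norm-distortion bound the abstract appeals to can be illustrated numerically with a linear map in place of RAE's learned network (this is not RAE's training code): when every singular value of the reduction map W lies in [1 − ε, 1 + ε], vectors in W's row space have their norms distorted by at most that factor, and applying the bound to difference vectors is what protects k-NN relationships.

```python
import numpy as np

# Hedged illustration of the bounding effect (not RAE itself): build a
# k x m linear reduction map W whose singular values all lie in
# [1 - eps, 1 + eps], and check that ||W x|| / ||x|| stays in that
# interval for data lying in W's row space.
rng = np.random.default_rng(1)
m, k, eps = 64, 16, 0.05

U, _ = np.linalg.qr(rng.normal(size=(m, k)))    # orthonormal m x k basis
s = 1.0 + eps * rng.uniform(-1.0, 1.0, size=k)  # singular values in [0.95, 1.05]
W = (U * s).T                                   # k x m map, singular values = s

Z = rng.normal(size=(200, k))
X = Z @ U.T                                     # vectors in W's row space
ratios = np.linalg.norm(X @ W.T, axis=1) / np.linalg.norm(X, axis=1)
```

For components outside the row space no lower bound holds, which is one reason a learned (rather than fixed) map is used: it can align the preserved subspace with where the data actually lives.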


Solving engineering eigenvalue problems with neural networks using the Rayleigh quotient

Rowan, Conor, Evans, John, Maute, Kurt, Doostan, Alireza

arXiv.org Artificial Intelligence

From characterizing the speed of a thermal system's response to computing natural modes of vibration, eigenvalue analysis is ubiquitous in engineering. In spite of this, eigenvalue problems have received relatively little treatment compared to standard forward and inverse problems in the physics-informed machine learning literature. In particular, neural network discretizations of solutions to eigenvalue problems have seen only a handful of studies. Owing to their nonlinearity, neural network discretizations prevent the conversion of the continuous eigenvalue differential equation into a standard discrete eigenvalue problem. In this setting, eigenvalue analysis requires more specialized techniques. Using a neural network discretization of the eigenfunction, we show that a variational form of the eigenvalue problem called the "Rayleigh quotient" in tandem with a Gram-Schmidt orthogonalization procedure is a particularly simple and robust approach to find the eigenvalues and their corresponding eigenfunctions. This method is shown to be useful for finding sets of harmonic functions on irregular domains, parametric and nonlinear eigenproblems, and high-dimensional eigenanalysis. We also discuss the utility of harmonic functions as a spectral basis for approximating solutions to partial differential equations. Through various examples from engineering mechanics, the combination of the Rayleigh quotient objective, Gram-Schmidt procedure, and the neural network discretization of the eigenfunction is shown to offer unique advantages for handling continuous eigenvalue problems.
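The combination described above can be sketched on a plain matrix rather than a neural-network discretization (a simplifying assumption; the paper's point is precisely the continuous, neural-network setting): minimize the Rayleigh quotient by gradient descent to find the smallest eigenpair, then use Gram-Schmidt deflation against the eigenvectors already found to pick up the next mode.

```python
import numpy as np

def smallest_eigs(A, num, lr=0.2, iters=3000, seed=0):
    """Smallest `num` eigenpairs of symmetric A: gradient descent on the
    Rayleigh quotient R(x) = x^T A x / x^T x, with Gram-Schmidt deflation
    against the eigenvectors already found."""
    rng = np.random.default_rng(seed)
    vecs, vals = [], []
    for _ in range(num):
        x = rng.normal(size=A.shape[0])
        for _ in range(iters):
            r = (x @ A @ x) / (x @ x)              # Rayleigh quotient R(x)
            x = x - lr * 2.0 * (A @ x - r * x) / (x @ x)  # step along -grad R
            for v in vecs:                         # Gram-Schmidt deflation
                x -= (v @ x) * v
            x /= np.linalg.norm(x)
        vecs.append(x)
        vals.append(float(x @ A @ x))
    return np.array(vals), np.array(vecs)

# 1D discrete Laplacian with Dirichlet ends, a standard test eigenproblem.
N = 12
A = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
vals, _ = smallest_eigs(A, num=2)
```

With a neural-network eigenfunction the same two ingredients apply, but the quotient is a ratio of integrals evaluated by quadrature and the deflation is an orthogonalization in function space.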


Variational quantum and neural quantum states algorithms for the linear complementarity problem

De, Saibal, Knitter, Oliver, Kodati, Rohan, Jayakumar, Paramsothy, Stokes, James, Veerapaneni, Shravan

arXiv.org Artificial Intelligence

Variational quantum algorithms (VQAs) are promising hybrid quantum-classical methods designed to leverage the computational advantages of quantum computing while mitigating the limitations of current noisy intermediate-scale quantum (NISQ) hardware. Although VQAs have been demonstrated as proofs of concept, their practical utility in solving real-world problems -- and whether quantum-inspired classical algorithms can match their performance -- remains an open question. We present a novel application of the variational quantum linear solver (VQLS) and its classical neural quantum states-based counterpart, the variational neural linear solver (VNLS), as key components within a minimum map Newton solver for a complementarity-based rigid body contact model. We demonstrate using the VNLS that our solver accurately simulates the dynamics of rigid spherical bodies during collision events. These results suggest that quantum and quantum-inspired linear algebra algorithms can serve as viable alternatives to standard linear algebra solvers for modeling certain physical systems.
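The outer algorithm the abstract embeds its solvers in can be sketched classically. Below is a minimum-map (semismooth) Newton method for the linear complementarity problem, find z ≥ 0 with w = Mz + q ≥ 0 and z·w = 0; each Newton step solves one linear system, and that inner solve is the slot the paper fills with VQLS/VNLS. Here `np.linalg.solve` stands in for it, and the small test instance is an assumption for illustration.

```python
import numpy as np

# Hedged classical sketch of a minimum-map Newton solver for the LCP:
#   find z >= 0 with w = M z + q >= 0 and z . w = 0.
# The residual F(z) = min(z, M z + q) is zero exactly at a solution; a
# generalized Jacobian picks row i from M where (M z + q)_i < z_i and the
# identity row otherwise. The inner linear solve is where VQLS/VNLS would
# be substituted in the paper's pipeline.
def lcp_min_map_newton(M, q, iters=50, tol=1e-10):
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        w = M @ z + q
        F = np.minimum(z, w)                  # min-map residual
        if np.linalg.norm(F) < tol:
            break
        active = w < z                        # rows where the min picks w
        J = np.where(active[:, None], M, np.eye(n))
        z = z - np.linalg.solve(J, F)         # inner linear solve (VQLS/VNLS slot)
    return z

rng = np.random.default_rng(2)
B = rng.normal(size=(4, 4))
M = B @ B.T + 4 * np.eye(4)                   # SPD, hence a P-matrix: unique solution
q = rng.normal(size=4)
z = lcp_min_map_newton(M, q)
w = M @ z + q
```

In the rigid-body contact setting, M encodes the contact geometry and mass matrix, q the free-flight velocities, and z the impulse magnitudes, so the complementarity condition expresses "no force without contact, no penetration with force."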